


Invited Talks
Saiph Savage
<div class="supplemental-html"> <ul style="list-style-type: none; line-height:1em; font-size:.9em; color:#666;padding: 5px !important;"> <li>Moderator: Finale Doshi-Velez </li> <li>On-demand video (45 minutes)</li> <li>Live Q&A (10 min)</li> <li>Break (5 min)</li> <li>Ask Me Anything Chat (up to an hour)</li> </ul> </div>

The A.I. industry has created new jobs that have been essential to the real-world deployment of intelligent systems. These new jobs typically focus on labeling data for machine learning models or on having workers complete tasks that A.I. alone cannot do. Human labor, combined with A.I., has powered a futuristic reality in which self-driving cars and voice assistants are now commonplace. However, the workers powering our A.I. industry are often invisible to consumers, and this invisibility has facilitated a reality in which they are often paid below minimum wage and have limited career growth opportunities. In this talk, I will present how we can design a future of work that empowers the invisible workers behind our A.I. I propose a framework that transforms invisible A.I. labor into opportunities for skill growth and higher hourly wages, and that facilitates transitions to new creative jobs that are unlikely to be automated in the future. Taking inspiration from social theories on solidarity and collective action, my framework introduces two new techniques for creating career ladders within invisible A.I. labor: a) Solidarity Blockers, computational methods that use solidarity to collectively organize workers to help each other build new skills while completing invisible labor; and b) Entrepreneur Blocks, computational …

(Posner Lecture)
Chris Bishop
<div class="supplemental-html"> <ul style="list-style-type: none; line-height:1em; font-size:.9em; color:#666;padding: 5px !important;"> <li>Moderator: Chris Williams</li> <li>On-demand video (45 minutes)</li> <li>Live Q&A (10 min)</li> <li>Break (5 min)</li> <li>Ask Me Anything Chat (up to an hour)</li> </ul> </div>

The two long-held aspirations to understand the mechanisms of human intelligence, and to recreate such intelligence in machines, have inspired many of us to build our careers in the field of machine learning. However, while the creation of technologies supporting general intelligence would be truly revolutionary, such an achievement still seems to lie well into the future. Meanwhile, another profound revolution, also built on machine learning, is already unfolding and is set to transform almost every aspect of our lives. In this talk I will highlight the nature of this revolution and why the coming decade will be a hugely exciting, and critically important, time to engage deeply in machine learning for those who want to have a truly transformational impact in the real world.

Charles Isbell
<div class="supplemental-html"> <ul style="list-style-type: none; line-height:1em; font-size:.9em; color:#666;padding: 5px !important;"> <li>Moderator: Peter Stone</li> <li>On-demand video (45 minutes)</li> <li>Live Q&A (10 min)</li> <li>Break (5 min)</li> <li>Ask Me Anything Chat (up to an hour)</li> </ul> </div>

Successful technological fields have a moment when they become pervasive, important, and noticed. They are deployed into the world and, inevitably, something goes wrong. A badly designed interface leads to an aircraft disaster. A buggy controller delivers a lethal dose of radiation to a cancer patient. The field must then choose to mature and take responsibility for avoiding the harms associated with what it is producing. Machine learning has reached this moment.

In this talk, I will argue that the community needs to adopt systematic approaches for creating robust artifacts that contribute to larger systems that impact the real human world. I will share perspectives from multiple researchers in machine learning, theory, computer perception, and education; discuss with them approaches that might help us to develop more robust machine-learning systems; and explore scientifically interesting problems that result from moving beyond narrow machine-learning algorithms to complete machine-learning systems.

Anthony M Zador
<div class="supplemental-html"> <ul style="list-style-type: none; line-height:1em; font-size:.9em; color:#666;padding: 5px !important;"> <li>Moderator: Blake Richards </li> <li>On-demand video (45 minutes)</li> <li>Live Q&A (10 min)</li> <li>Break (5 min)</li> <li>Ask Me Anything Chat (up to an hour)</li> </ul> </div>

Many animals are born with impressive innate capabilities. At birth, a spider can build a web, a colt can stand, and a whale can swim. From an evolutionary perspective, it is easy to see how innate abilities could be selected for: Those individuals that can survive beyond their most vulnerable early hours, days or weeks are more likely to survive until reproductive age, and attain reproductive age sooner. I argue that most animal behavior is not the result of clever learning algorithms, but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck,” which serves as a regularizer. The genomic bottleneck suggests a path toward architectures capable of rapid learning.
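The compression idea above can be made concrete with a toy calculation (my illustration, not from the talk): if a full wiring diagram between two neural populations is routed through a low-rank "bottleneck," the number of parameters that must be specified genomically shrinks dramatically, and that capacity limit acts like a regularizer.

```python
import numpy as np

# Illustrative sketch: a "genomic bottleneck" as low-rank compression of
# a wiring matrix. The full connectivity W has n_pre * n_post free
# parameters; routing it through a rank-r bottleneck leaves only
# r * (n_pre + n_post) parameters to "store in the genome".
rng = np.random.default_rng(0)
n_pre, n_post, r = 1000, 1000, 20

# "Genome": two small factors, the compact description.
G_in = rng.normal(size=(n_pre, r)) / np.sqrt(r)
G_out = rng.normal(size=(r, n_post))

# "Development": the full wiring diagram is unrolled from the genome.
W = G_in @ G_out

full_params = n_pre * n_post          # 1,000,000
genome_params = r * (n_pre + n_post)  # 40,000
print(f"full wiring: {full_params} params, genome: {genome_params} params")
print(f"compression ratio: {full_params / genome_params:.0f}x")
```

Here the genome stores 25x fewer numbers than the wiring diagram it generates; the choice of a low-rank factorization is only one stand-in for whatever compression biology actually uses.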

Shafi Goldwasser
<div class="supplemental-html"> <ul style="list-style-type: none; line-height:1em; font-size:.9em; color:#666;padding: 5px !important;"> <li>Moderator: Avrim Blum </li> <li>On-demand video (45 minutes)</li> <li>Live Q&A (10 min)</li> <li>Break (5 min)</li> <li>Ask Me Anything Chat (up to an hour)</li> </ul> </div>

We will present cryptography-inspired models and results to address three challenges that emerge when worst-case adversaries enter the machine learning landscape: verifying machine learning models given limited access to good data, training at scale on private training data, and achieving robustness against adversarial examples crafted by worst-case adversaries.
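To make the last challenge concrete, here is a minimal sketch (my illustration, not from the talk) of a worst-case adversary attacking a linear classifier: a fast-gradient-sign perturbation, which moves an input in the direction that maximally increases the loss under an L-infinity budget.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=5)   # trained logistic-regression weights (assumed given)
x = rng.normal(size=5)   # a clean input
y = 1.0                  # its true label

# Gradient of the logistic loss with respect to the input.
grad_x = (sigmoid(w @ x) - y) * w

eps = 0.5
x_adv = x + eps * np.sign(grad_x)  # worst-case L-infinity perturbation

print("clean score:", sigmoid(w @ x))
print("adversarial score:", sigmoid(w @ x_adv))
```

Every coordinate of `x_adv` moves by at most `eps`, yet the classifier's confidence in the true label provably drops; defenses against such worst-case perturbations are one of the topics the talk's cryptographic models address.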

(Breiman Lecture)
Marloes Maathuis
<div class="supplemental-html"> <ul style="list-style-type: none; line-height:1em; font-size:.9em; color:#666;padding: 5px !important;"> <li>Moderator: Bernhard Schölkopf </li> <li>On-demand video (45 minutes)</li> <li>Live Q&A (10 min)</li> <li>Break (5 min)</li> <li>Ask Me Anything Chat (up to an hour)</li> </ul> </div>

Causal reasoning is important in many areas, including the sciences, decision making, and public policy. The gold-standard method for determining causal relationships uses randomized controlled perturbation experiments. In many settings, however, such experiments are expensive, time-consuming, or impossible. Hence, it is worthwhile to obtain causal information from observational data, that is, from data obtained by observing the system of interest without subjecting it to interventions. In this talk, I will discuss approaches for causal learning from observational data, paying particular attention to the combination of causal structure learning and variable selection, with the aim of estimating causal effects. Throughout, examples will be used to illustrate the concepts.
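A small simulated example (my illustration, not from the talk) shows why observational estimates need causal structure: with a confounder Z influencing both treatment T and outcome Y, the naive contrast is biased, while the back-door adjustment, which averages within-stratum contrasts over the confounder's distribution, recovers the true effect.

```python
import numpy as np

# Z confounds treatment T and outcome Y; the true effect of T on Y is 2.
rng = np.random.default_rng(0)
n = 200_000
Z = rng.binomial(1, 0.5, n)                      # confounder
T = rng.binomial(1, np.where(Z == 1, 0.8, 0.3))  # treatment depends on Z
Y = 2.0 * T + 3.0 * Z + rng.normal(0, 1, n)      # true effect of T is 2

# Naive contrast: biased, because Z opens a back-door path.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Back-door adjustment: average within-stratum contrasts over P(Z).
adjusted = sum(
    (Y[(T == 1) & (Z == z)].mean() - Y[(T == 0) & (Z == z)].mean())
    * (Z == z).mean()
    for z in (0, 1)
)
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f} (truth: 2.0)")
```

The adjustment here assumes the confounder is observed and the graph is known; learning that structure from data is exactly the problem the talk takes up.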

Jeff Shamma
<div class="supplemental-html"> <ul style="list-style-type: none; line-height:1em; font-size:.9em; color:#666;padding: 5px !important;"> <li>Moderator: Michael Littman</li> <li>On-demand video (45 minutes)</li> <li>Live Q&A (10 min)</li> <li>Break (5 min)</li> <li>Ask Me Anything Chat (up to an hour)</li> </ul> </div>

The impact of feedback control is extensive. It is deployed in a wide array of engineering domains, including aerospace, robotics, automotive, communications, manufacturing, and energy applications, with superhuman performance having been achieved for decades. Many settings in learning involve feedback interconnections, e.g., reinforcement learning has an agent in feedback with its environment, and multi-agent learning has agents in feedback with each other. By explicitly recognizing the presence of a feedback interconnection, one can exploit feedback control perspectives for the analysis and synthesis of such systems, as well as investigate fundamental limits on, and trade-offs in, the achievable performance inherent in all feedback control systems. This talk highlights selected feedback control concepts—in particular robustness, passivity, tracking, and stabilization—as they relate to specific questions in evolutionary game theory, no-regret learning, and multi-agent learning.
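One of the feedback loops named above can be sketched in a few lines (my illustration, not from the talk): a no-regret learner such as multiplicative weights plays a mixed strategy, observes the resulting losses, and feeds them back into its weights, so that its average loss approaches that of the best fixed action in hindsight.

```python
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """losses: (T, K) array of per-round, per-action losses in [0, 1].
    Returns the average regret against the best fixed action."""
    T, K = losses.shape
    w = np.ones(K)
    total_loss = 0.0
    for t in range(T):
        p = w / w.sum()                    # mixed strategy this round
        total_loss += p @ losses[t]        # expected loss incurred
        w *= np.exp(-eta * losses[t])      # feedback: down-weight costly actions
    best_fixed = losses.sum(axis=0).min()  # best single action in hindsight
    return (total_loss - best_fixed) / T

rng = np.random.default_rng(0)
losses = rng.uniform(0, 1, size=(5000, 10))
regret = multiplicative_weights(losses)
print("average regret:", regret)
```

Viewing the weight update as a dynamical system in feedback with its loss signal is what lets control-theoretic notions such as passivity and stability be brought to bear on its convergence in games.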